What does TDD look like in real-world development of microservices or web interfaces? That's what this series is all about.
The posts in this series:

1. Introduction
2. The requirements
3. Test case analysis I
4. Test case analysis II
5. Test case analysis III
6. Test case analysis IV
7. Setting up a project with Spring Boot
8. Which tests to write?
9. API design
When we set up the project, we ran a couple of tests (integration and unit) to see that our environment can run them. There was no actual logic involved, but we got initial feedback that the platform is ready to work as-is.
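For reference, those “does the environment work” tests can be as small as the following sketch, assuming a standard Spring Boot + JUnit 5 setup (the class names are illustrative, not taken from the project):

```java
// A minimal sketch of environment smoke tests, assuming Spring Boot + JUnit 5.
// Class names are illustrative only.
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Integration-level check: the Spring application context starts with the current configuration.
@SpringBootTest
class ApplicationSmokeTest {

    @Test
    void contextLoads() {
    }
}

// Unit-level check: plain JUnit runs without any Spring machinery.
class PlainUnitSmokeTest {

    @Test
    void unitTestsRun() {
        assertTrue(true);
    }
}
```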
The next logical step is to set up a couple of real tests. But what kind of tests should we start with, and how do they fit our TDD effort?
The right stuff
In our test analysis, we ran through a lot of cases, and some of them can serve as acceptance tests. Acceptance tests prove that the system works for the customer as specified, so they would drive the UI and exercise the entire system.
The main issue with testing through the UI is volatility – the UI tends to change a lot more than the functionality behind it. That forces us to change our tests frequently – a big effort, with relatively small value.
Another issue with testing through the UI is that we add another layer (or more) for the tests to pass through. Although they give us a “thumbs up” when they pass, we’re adding more opportunities for the tests to fail for the wrong reason.
It’s true that integration tests can fail because of anything inside that black box we write. But UI tests can also fail because of network issues, security and configuration problems between the UI and the API.
What happens then?
Let’s talk for a minute about the cost of “maintaining the tests”. Although the test failure is the signal, what happens next is that we start investigating everything – the tests, the code, the environment they ran in, the build system, and more. This takes time (sometimes a lot). If we find out the tests failed for “the right reason” – the code doesn’t do what we intended it to do – that’s good. They did their job.
If the tests failed for any other reason – the tests are no longer correct, environment issues, missing files that were not added to source control – we call that “waste”. This is not just a waste of time – these failures erode our trust in the tests we have, and reduce our motivation to keep writing them. On top of the wasted investigation time, it sometimes results in homework: fix the tests, add the file, re-configure the environment.
If the maintenance cost of integration and system tests is so high, why not skip them altogether and write only unit tests? They wouldn’t fail for the wrong reason – they are isolated, they test a very small scope, and they are cheap to write.
But if we wrote only unit tests, which don’t check the whole system, we wouldn’t have proof that the system actually works. So as we write tests, selecting the right type of test becomes a balancing act. The testing pyramid model describes the resulting set of tests.
Ok, so far we agree that we don’t want UI tests, and we’ll need to select API and unit tests. How is that related to TDD?
The Big, The Small and the Ugly
Test-first adds another dimension to the game. In order to use TDD to make progress quickly, we get pulled toward unit tests. Using TDD from the API level is possible, but we won’t move as quickly. We’ll need to build APIs and call them in the tests, add beans to configurations, set up dependencies – all slower than writing tests for the code itself. In TDD we expect very short cycles of development: every couple of minutes we have another passing test. Going outside to the API level, depending on how complex the system is, the cycle can stretch to hours (and sometimes days, I’ve seen it) between passing tests.
On the other hand, we still want those bigger tests. We’d want to set up a few API tests that act as guardrails. They make sure we’re on the right track while we’re messing with the internals of the system. Having these big-scope tests can lead us in the right direction.
If we write them BDD-style, they can even tell the story in the domain language, not in “ugly” REST API dialect. Remember those test cases? We wrote them in our own language, without any technobabble. We can use BDD tools (e.g. Cucumber) for that, or we can simply write the tests in a readable manner, as sketched below.
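For example, an API-level guardrail test with a domain-language name might look something like this sketch (the endpoint and the JSON shape are made up for illustration – they are not the actual API of this project):

```java
// A sketch of a readable API guardrail test, assuming a Spring Boot web app
// tested through MockMvc. The endpoint and response shape are hypothetical.
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@SpringBootTest
@AutoConfigureMockMvc
class RegistrationApiTest {

    @Autowired
    private MockMvc mockMvc;

    // The method name tells the story in domain language; only the body speaks REST.
    @Test
    void aNewUserIsNotRegisteredToAnyCourse() throws Exception {
        mockMvc.perform(get("/courses?user=alice"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.courses").isEmpty());
    }
}
```

Whether we express this with Cucumber scenarios or plain test code, the goal is the same: the test should read like one of the cases from our analysis, not like an HTTP script.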
Another advantage of having the bigger tests is that they help us think about the APIs of the system. With Test-First (at any scale) we think about how we talk with the system, and the integration tests can help us there as well.
If we had all the time in the world, we would cover all the code with all kinds of tests. But we don’t. So we need to choose which cases we want to use as the API tests, and which to leave as unit tests.
Let’s start off with a couple of API tests, then move to TDD at the unit level.